
    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

The closest vector problem (CVP) and the shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's \emph{embedding technique} is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a \emph{bounded distance decoding} (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra, Lenstra and Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves on the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and we propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance. Comment: To appear in IEEE Transactions on Information Theory.
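To make the embedding idea concrete, here is a minimal Python sketch of Kannan's embedding on a toy BDD instance: the target is appended to the basis with an embedding factor $\gamma$, and a shortest vector of the extended lattice reveals the noise. The helper names (embed_basis, brute_force_svp) are illustrative only, and the brute-force enumeration merely stands in for LLL or any other Hermite SVP solver.

```python
# Minimal sketch of Kannan's embedding for BDD on a toy 2-D lattice.
# Names and parameters are illustrative, not from the paper.
import itertools
import numpy as np

def embed_basis(B, t, gamma):
    """Build the (n+1)-dim embedding basis: rows of B padded with 0,
    plus the row (t, gamma). A short vector then looks like
    (t - v, gamma) with v the lattice point closest to t."""
    n = B.shape[0]
    top = np.hstack([B, np.zeros((n, 1))])
    bottom = np.append(t, gamma)
    return np.vstack([top, bottom])

def brute_force_svp(B, bound=5):
    """Toy SVP solver: enumerate small integer combinations of the rows.
    Only viable in tiny dimension; stands in for LLL/BKZ."""
    best, best_norm = None, float("inf")
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=B.shape[0]):
        v = np.array(coeffs) @ B
        norm = np.linalg.norm(v)
        if 0 < norm < best_norm:
            best, best_norm = v, norm
    return best

B = np.array([[7.0, 1.0], [3.0, 5.0]])        # lattice basis (rows)
v = 2 * B[0] - 1 * B[1]                        # a lattice point
t = v + np.array([0.3, -0.2])                  # BDD target: v plus small noise
gamma = 0.5                                    # embedding factor ~ noise norm
s = brute_force_svp(embed_basis(B, t, gamma))
if abs(abs(s[-1]) - gamma) < 1e-9:             # last coord +/- gamma => usable
    e = np.sign(s[-1]) * s[:-1]                # recover the noise vector
    print("decoded lattice point:", t - e)     # should equal v
```

The guard on the last coordinate reflects the BDD condition: when the noise is small relative to $\lambda_1$, the shortest vector of the embedded lattice has the form $(\pm e, \pm\gamma)$ and the CVP solution is read off directly.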

    Fully Homomorphic Encryption over the Integers Revisited

Two main computational problems serve as security foundations of current fully homomorphic encryption schemes: Regev's Learning With Errors problem (LWE) and Howgrave-Graham's Approximate Greatest Common Divisor problem (AGCD). Our first contribution is a reduction from LWE to AGCD. As a second contribution, we describe a new AGCD-based fully homomorphic encryption scheme, which outperforms all prior AGCD-based proposals: its security does not rely on the presumed hardness of the so-called Sparse Subset Sum problem, and the bit-length of a ciphertext is only $\widetilde{O}(\lambda)$, where $\lambda$ refers to the security parameter.
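For intuition, the following toy Python sketch shows the AGCD setting and a DGHV-style bit encryption built on top of it; the scheme in the abstract improves on constructions of this shape. Parameter sizes and helper names are illustrative and deliberately far too small to be secure.

```python
# Toy sketch of AGCD instances and DGHV-style bit encryption over the
# integers. All sizes are demo values, wildly insecure in practice.
import secrets

P_BITS, Q_BITS, R_BITS = 64, 96, 16

def keygen():
    # Secret key: an odd P_BITS-bit integer p.
    return secrets.randbits(P_BITS) | 1 | (1 << (P_BITS - 1))

def agcd_sample(p):
    """One AGCD instance: x = p*q + r with small noise r."""
    return p * secrets.randbits(Q_BITS) + secrets.randbits(R_BITS)

def encrypt(p, m):
    """DGHV-style: noise is even, the message bit sits in the parity."""
    return p * secrets.randbits(Q_BITS) + 2 * secrets.randbits(R_BITS) + (m & 1)

def decrypt(p, c):
    return (c % p) % 2            # valid while the noise stays below p

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0) == 0 and decrypt(p, c1) == 1
# Homomorphism: XOR via addition, AND via multiplication (noise grows).
assert decrypt(p, c0 + c1) == 1
assert decrypt(p, c0 * c1) == 0
```

The asserts pass because the accumulated noise ($\approx 2^{35}$ after one multiplication here) stays far below $p \approx 2^{64}$; controlling exactly this growth is what drives ciphertext size, hence the significance of the $\widetilde{O}(\lambda)$ bound.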

    Fully Secure Functional Encryption for Inner Products, from Standard Assumptions

Functional encryption is a modern public-key paradigm where a master secret key can be used to derive sub-keys $SK_F$ associated with certain functions $F$ in such a way that the decryption operation reveals $F(M)$, if $M$ is the encrypted message, and nothing else. Recently, Abdalla et al. gave simple and efficient realizations of the primitive for the computation of linear functions on encrypted data: given an encryption of a vector $\vec{y}$ over some specified base ring, a secret key $SK_{\vec{x}}$ for the vector $\vec{x}$ allows computing $\langle \vec{x}, \vec{y} \rangle$. Their technique surprisingly allows for instantiations under standard assumptions, like the hardness of the Decision Diffie-Hellman (DDH) and Learning-with-Errors (LWE) problems. Their constructions, however, are only proved secure against selective adversaries, which have to declare the challenge messages $M_0$ and $M_1$ at the outset of the game. In this paper, we provide constructions that provably achieve security against more realistic adaptive attacks (where the messages $M_0$ and $M_1$ may be chosen in the challenge phase, based on the previously collected information) for the same inner product functionality. Our constructions are obtained from hash proof systems endowed with homomorphic properties over the key space. They are (almost) as efficient as those of Abdalla et al. and rely on the same hardness assumptions. In addition, we obtain a solution based on Paillier's composite residuosity assumption, which was an open problem even in the case of selective adversaries. We also propose LWE-based schemes that allow evaluation of inner products modulo a prime $p$, as opposed to the schemes of Abdalla et al. that are restricted to evaluations of integer inner products of short integer vectors. We finally propose a solution based on Paillier's composite residuosity assumption that enables evaluation of inner products modulo an RSA integer $N = pq$. We demonstrate that the functionality of inner products over a prime field is powerful, and can be used to construct bounded collusion FE for all circuits.
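As a concrete reference point, here is a minimal Python sketch of the selectively secure DDH-based inner-product scheme of Abdalla et al. that this work strengthens: a key is the inner product of the master secret with $\vec{x}$, and decryption recovers $g^{\langle \vec{x}, \vec{y} \rangle}$ followed by a small discrete logarithm. The group, generator, and sizes are toy assumptions for the demo (requires Python 3.8+ for modular inverses via pow).

```python
# Toy DDH-based inner-product functional encryption in Z_P^*.
# P and G are demo choices, not real-world parameters.
import secrets

P = 2 ** 127 - 1                  # a Mersenne prime, toy modulus
G = 3                             # assumed generator for the demo
ORDER = P - 1

def setup(n):
    s = [secrets.randbelow(ORDER) for _ in range(n)]           # msk
    mpk = [pow(G, si, P) for si in s]                          # h_i = g^{s_i}
    return mpk, s

def keygen(s, x):
    return sum(si * xi for si, xi in zip(s, x)) % ORDER        # sk_x = <s, x>

def encrypt(mpk, y):
    r = secrets.randbelow(ORDER)
    ct0 = pow(G, r, P)
    cts = [pow(h, r, P) * pow(G, yi, P) % P for h, yi in zip(mpk, y)]
    return ct0, cts

def decrypt(ct0, cts, sk_x, x, bound=1000):
    num = 1
    for c, xi in zip(cts, x):
        num = num * pow(c, xi, P) % P                          # prod ct_i^{x_i}
    val = num * pow(ct0, -sk_x, P) % P                         # = g^{<x, y>}
    for m in range(bound):                                     # small dlog
        if pow(G, m, P) == val:
            return m
    raise ValueError("inner product out of range")

mpk, msk = setup(3)
x, y = [1, 2, 3], [4, 5, 6]
ct0, cts = encrypt(mpk, y)
print(decrypt(ct0, cts, keygen(msk, x), x))                    # 4 + 10 + 18 = 32
```

Note the final brute-force discrete logarithm: this is why such DDH-based schemes only support inner products from a polynomially bounded range, a restriction the LWE- and Paillier-based variants in this paper relax.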

    Lattice-Based Group Signatures with Logarithmic Signature Size

Group signatures are cryptographic primitives where users can anonymously sign messages in the name of a population they belong to. Gordon et al. (Asiacrypt 2010) suggested the first realization of group signatures based on lattice assumptions in the random oracle model. A significant drawback of their scheme is its linear signature size in the cardinality $N$ of the group. A recent extension proposed by Camenisch et al. (SCN 2012) suffers from the same overhead. In this paper, we describe the first lattice-based group signature schemes where the signature and public key sizes are essentially logarithmic in $N$ (for any fixed security level). Our basic construction only satisfies a relaxed definition of anonymity (just like the Gordon et al. system) but readily extends into a fully anonymous group signature (i.e., one that resists adversaries equipped with a signature opening oracle). We prove the security of our schemes in the random oracle model under the SIS and LWE assumptions.

    Cryptanalysis on the Multilinear Map over the Integers and its Related Problems

The CRT-ACD problem is to find the primes $p_1, \ldots, p_n$ given polynomially many instances of $\mathrm{CRT}_{(p_1,\ldots,p_n)}(r_1,\ldots,r_n)$ for small integers $r_1, \ldots, r_n$. The CRT-ACD problem is regarded as a hard problem, but its hardness has not been proven yet. In this paper, we analyze the CRT-ACD problem when given one more input $\mathrm{CRT}_{(p_1,\ldots,p_n)}(x_0/p_1, \ldots, x_0/p_n)$ for $x_0 = \prod_{i=1}^n p_i$, and we propose a polynomial-time algorithm for this problem that uses products of the instances and the auxiliary input. This algorithm yields a polynomial-time cryptanalysis of the (approximate) multilinear map of Coron, Lepoint and Tibouchi (CLT): we show that by properly multiplying encodings of zero with zero-testing parameters in the CLT scheme, one can obtain the required input of our algorithm: products of CRT-ACD instances and the auxiliary input. This leads to a total break: all the quantities that were supposed to be kept secret can be recovered in an efficient and public manner. We also introduce polynomial-time algorithms for the Subgroup Membership, Decision Linear, and Graded External Diffie-Hellman problems, which are used as the base problems of several cryptographic schemes constructed on multilinear maps.
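The heart of the algorithm can be seen numerically: multiplying an instance by the auxiliary input produces an integer that does not wrap modulo $x_0$, turning modular relations into exact equations over the integers. The Python sketch below (toy primes, illustrative names, sympy for CRT) checks this no-wrap identity.

```python
# Numeric check of the key observation: with the auxiliary input
# y = CRT_{(p_1,...,p_n)}(x_0/p_1,...,x_0/p_n), the product of y with a
# CRT-ACD instance does NOT wrap mod x_0. Toy primes only.
from functools import reduce
from sympy import randprime
from sympy.ntheory.modular import crt

n = 3
ps = [randprime(10 ** (6 + i), 10 ** (7 + i)) for i in range(n)]  # secret p_i
x0 = reduce(lambda a, b: a * b, ps)

def crt_acd_instance(rs):
    """CRT_{(p_1,...,p_n)}(r_1,...,r_n) for small noise values r_i."""
    return crt(ps, rs)[0]

rs = [5, 17, 42]                                       # small r_i
u = crt_acd_instance(rs)
y = crt(ps, [x0 // p % p for p in ps])[0]              # auxiliary input

# Because each r_i << p_i, sum_i r_i * (x0 / p_i) < x0, so reducing mod x0
# is vacuous and we read off an exact linear equation in the unknown r_i.
assert u * y % x0 == sum(r * (x0 // p) for r, p in zip(rs, ps))
# The full attack stacks such products into matrices whose eigenvalues leak
# ratios of the r_i, after which each p_i falls out of a GCD with x0.
```

This single identity is what makes the extra input so damaging: each product yields an exact integer relation rather than a noisy modular one, which is exactly the leverage the CLT break exploits.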

    Improved security proofs in lattice-based cryptography: using the Rényi divergence rather than the statistical distance

The Rényi divergence is a measure of closeness of two probability distributions. We show that it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography. Using the Rényi divergence is particularly suited for security proofs of primitives in which the attacker is required to solve a search problem (e.g., forging a signature). We show that it may also be used in the case of distinguishing problems (e.g., semantic security of encryption schemes) when they enjoy a public sampleability property. The techniques lead to security proofs for schemes with smaller parameters, and sometimes to simpler security proofs than the existing ones.
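A small Python sketch of the quantity involved: the Rényi divergence of order $a$ between two discrete distributions, together with the probability-preservation inequality $Q(E) \ge P(E)^{a/(a-1)} / R_a(P\|Q)$ that replaces statistical-distance arguments for search problems. The two slightly shifted discrete Gaussians are toy stand-ins for an ideal distribution and its implementation.

```python
# Rényi divergence between two toy discrete Gaussians, plus a check of
# the probability-preservation bound used in search-problem proofs.
import math

def renyi_divergence(P, Q, a=2.0):
    """R_a(P||Q) = (sum_x P(x)^a / Q(x)^(a-1))^(1/(a-1)),
    assuming support(P) is contained in support(Q)."""
    s = sum(P[x] ** a / Q[x] ** (a - 1) for x in P)
    return s ** (1.0 / (a - 1))

def discrete_gaussian(sigma, center, support):
    w = {x: math.exp(-(x - center) ** 2 / (2 * sigma ** 2)) for x in support}
    total = sum(w.values())
    return {x: v / total for x, v in w.items()}

support = range(-50, 51)
P = discrete_gaussian(4.0, 0.0, support)        # ideal distribution
Q = discrete_gaussian(4.0, 0.1, support)        # slightly shifted variant
a = 2.0
R = renyi_divergence(P, Q, a)
event = [x for x in support if x >= 3]          # some "attack succeeds" event
pP = sum(P[x] for x in event)
pQ = sum(Q[x] for x in event)
print(f"R_{a}(P||Q) = {R:.6f}")
print(f"P(E) = {pP:.4f}, Q(E) = {pQ:.4f}, bound = {pP ** (a/(a-1)) / R:.4f}")
assert pQ >= pP ** (a / (a - 1)) / R            # probability preservation
```

The point of the bound: if an attacker succeeds with noticeable probability under $Q$, it also succeeds with related (polynomially smaller) probability under $P$ whenever $R_a(P\|Q)$ is small, even when the statistical distance between $P$ and $Q$ is too large for the usual argument.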

Lattice reduction algorithms and an application to finding worst cases for the rounding of mathematical functions

Euclidean lattices are a powerful tool in several areas of algorithmics, among which are cryptography and algorithmic number theory. The contributions of this thesis are twofold: we improve lattice basis reduction algorithms, and we introduce a new application of lattice reduction, in computer arithmetic. Concerning lattices, we consider both small dimensions and arbitrary dimensions, for which we improve the classical LLL algorithm. Concerning the application, we make use of Coppersmith's method for computing the small roots of multivariate modular polynomials, in order to find the worst cases for the rounding of mathematical functions, when the function, the rounding mode and the precision are fixed. We also generalise our technique to find input numbers that are simultaneously bad cases for two functions. These two methods are expensive pre-computations, but once performed, they help speed up the implementations of elementary mathematical functions in fixed precision.
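To illustrate the application, the toy Python search below exhaustively looks for the worst cases for rounding of exp at a tiny fixed-point precision, i.e., inputs whose image lies closest to a rounding boundary. Exhaustive search is precisely what the Coppersmith-based pre-computation replaces at realistic precisions; the precision values and names here are illustrative (mpmath for multiprecision).

```python
# Brute-force hunt for hard-to-round cases of exp at a toy precision:
# over all 12-bit inputs in (0, 1), find where 2^12 * exp(x) lands
# closest to a half-integer, i.e. where correct rounding to nearest
# needs the most extra precision.
from mpmath import mp, exp, floor

mp.prec = 200                       # plenty of guard bits for the search
P = 12                              # toy target precision in bits
worst_x, worst_gap = None, 1.0
for k in range(1, 2 ** P):          # inputs x = k / 2^P
    y = exp(mp.mpf(k) / 2 ** P) * 2 ** P
    frac = y - floor(y)
    gap = abs(frac - 0.5)           # distance to the rounding boundary
    if gap < worst_gap:
        worst_x, worst_gap = k, gap
print(f"hardest input: x = {worst_x}/2^{P}, distance to tie: {worst_gap}")
```

At 53 or 64 bits of precision this loop would need on the order of $2^{53}$ or $2^{64}$ evaluations, which is why reducing the search to small roots of modular polynomials, solvable via lattice reduction, is the enabling idea.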

    An LLL algorithm with quadratic complexity

30 page(s)

Cryptanalysis of Gu's ideal multilinear map

In March 2015, Gu Chunsheng proposed a candidate ideal multilinear map [eprint 2015/269]. An ideal multilinear map allows one to perform as many multiplications as desired, while in $k$-multilinear maps like GGH [EC 2013] or CLT [CR 2013, CR 2015] one can perform at most a predetermined number $k$ of multiplications. In this note, we show that the extraction Multilinear Computational Diffie-Hellman problem (ext-MCDH) associated to Gu's map can be solved in polynomial time: this candidate ideal multilinear map is insecure. We also give intuition on why we think that the two other ideal multilinear maps proposed by Gu in [eprint 2015/269] are not secure either.